    Object grasping and manipulation in capuchin monkeys (genera Cebus and Sapajus)

    The abilities to perform skilled hand movements and to manipulate objects dexterously are landmarks in the evolution of primates. The study of how primates use their hands to grasp and manipulate objects in accordance with their needs sheds light on how these species are physically and mentally equipped to deal with the problems they encounter in their daily life. We report data on capuchin monkeys, highly manipulative platyrrhine species that usually spend a great deal of time in active manipulation to search for food and to prepare it for ingestion. Our aim is to provide an overview of current knowledge on the ability of capuchins to grasp and manipulate objects, with a special focus on how these species express their cognitive potential through manual behaviour. Data on the ability of capuchins to move their hands and on the neural correlates sustaining their actions are reported, as are findings on the manipulative ability of capuchins to anticipate future actions and to relate objects to other objects and substrates. The manual behaviour of capuchins is considered in different domains, such as motor planning, extractive foraging and tool use, in both captive and natural settings. Anatomofunctional and behavioural similarities to and differences from other haplorrhine species regarding manual dexterity are also discussed.

    Learning Intelligent Dialogs for Bounding Box Annotation

    We introduce Intelligent Annotation Dialogs for bounding box annotation. We train an agent to automatically choose a sequence of actions for a human annotator to produce a bounding box in a minimal amount of time. Specifically, we consider two actions: box verification, where the annotator verifies a box generated by an object detector, and manual box drawing. We explore two kinds of agents, one based on predicting the probability that a box will be positively verified, and the other based on reinforcement learning. We demonstrate that (1) our agents are able to learn efficient annotation strategies in several scenarios, automatically adapting to the image difficulty, the desired quality of the boxes, and the detector strength; (2) in all scenarios the resulting annotation dialogs speed up annotation compared to manual box drawing alone and box verification alone, while also outperforming any fixed combination of verification and drawing in most scenarios; (3) in a realistic scenario where the detector is iteratively re-trained, our agents evolve a series of strategies that reflect the shifting trade-off between verification and drawing as the detector grows stronger. Comment: This paper appeared at CVPR 201
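
    As a rough illustration of the first kind of agent described above (one that predicts the probability a detector box will be positively verified), here is a minimal Python sketch of the underlying decision rule; the time costs, function names, and threshold logic are hypothetical placeholders, not the paper's implementation.

        # Hypothetical sketch: choose "verify" when the expected time of the
        # verification path beats drawing the box by hand. The time costs
        # (t_verify, t_draw) and the acceptance-probability input are
        # illustrative assumptions, not values from the paper.

        def expected_verification_time(p_accept: float,
                                       t_verify: float = 3.5,
                                       t_draw: float = 25.0) -> float:
            """Expected annotation time if we first ask the annotator to verify.

            With probability p_accept the proposed box is accepted after one
            verification; otherwise we pay the verification cost and still
            draw the box manually.
            """
            return t_verify + (1.0 - p_accept) * t_draw

        def choose_action(p_accept: float,
                          t_verify: float = 3.5,
                          t_draw: float = 25.0) -> str:
            """Return 'verify' or 'draw' as the next step of the annotation dialog."""
            if expected_verification_time(p_accept, t_verify, t_draw) < t_draw:
                return "verify"
            return "draw"

        if __name__ == "__main__":
            print(choose_action(p_accept=0.9))   # confident detection -> "verify"
            print(choose_action(p_accept=0.05))  # weak detection -> "draw"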

    The co-development of manual and vocal activity in infants

    Manual and vocal actions in humans are coupled throughout the lifespan, from the anticipatory opening of the mouth as the hand moves to meet it in natal development to the more sophisticated co-expressive gesture of the proficient communicator (Iverson & Thelen, 1999). By adulthood, the systems supporting both speech and manual actions of gesture are so wholly integrated that the expression of both actions together is seamless and effortless (Gentilucci & Nicoladis, 2008). Both systems, though controlled by different muscles moving different articulators, exhibit parallels in their development and organization (Meier & Willerman, 1995). The manual control supporting gesture emerges earlier than the vocal control supporting speech (Ejiri & Masataka, 2001), and the actions of the hands and arms may encourage organization and patterns of vocal control (Iverson & Fagan, 2004). No research has yet shown the nature of this manual development in the context of vocal development. This study investigates the emergence and practice of manual configurations during vocal and linguistic development in eight typically developing infants. By observing the manual system only during vocal actions, while the participants progress through babble but before referential word use, this study demonstrates the nature of the developmental relationship between these systems before it is structured by language. These results illustrate the unique coupling of the vocal and motor systems and demonstrate the existence of manual configurations analogous to the practiced vocal patterns that support the development of language.

    Slip of the tongue: implications for evolution and language development

    A prevailing theory regarding the evolution of language implicates a gestural stage prior to the emergence of speech. In support of a transition of human language from a gestural to a vocal system, articulation of the hands and the tongue are underpinned by overlapping left-hemisphere-dominant neural regions. Behavioral studies demonstrate that human adults perform sympathetic mouth actions in imitative synchrony with manual actions. Additionally, right-handedness for precision manual actions in children has been correlated with the typical development of language, while a lack of hand bias has been associated with psychopathology. It therefore stands to reason that sympathetic mouth actions during fine precision motor action of the hands may be lateralized. We employed a fine-grained behavioral coding paradigm to provide the first investigation of tongue protrusions in typically developing 4-year-old children. Tongue protrusions were investigated across a range of cognitive tasks that required varying degrees of manual action: precision motor action, gross motor action and no motor actions. The rate of tongue protrusions was influenced by the motor requirements of the task, and tongue protrusions were significantly right-biased for only precision manual motor action (p < .001). From an evolutionary perspective, tongue protrusions can drive new investigations regarding how an early human communication system transitioned from hand to mouth. From a developmental perspective, the present study may serve to reveal patterns of tongue protrusions during the motor development of typically developing children.

    Localizing Actions from Video Labels and Pseudo-Annotations

    The goal of this paper is to determine the spatio-temporal location of actions in video. Where training from hard-to-obtain box annotations is the norm, we propose an intuitive and effective algorithm that localizes actions from their class label only. We are inspired by recent work showing that unsupervised action proposals selected with human point-supervision perform as well as using expensive box annotations. Rather than asking users to provide point supervision, we propose fully automatic visual cues that replace manual point annotations. We call the cues pseudo-annotations, introduce five of them, and propose a correlation metric for automatically selecting and combining them. Thorough evaluation on challenging action localization datasets shows that we reach results comparable to results with full box supervision. We also show that pseudo-annotations can be leveraged during testing to improve weakly- and strongly-supervised localizers. Comment: BMV
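
    To make the idea of pseudo-annotations more concrete, below is a minimal sketch of selecting an action proposal (a sequence of per-frame boxes) by how well it covers automatically generated point cues; the data structures, cue names, and the simple coverage-based scoring are assumptions for illustration, not the paper's exact correlation metric.

        # Hypothetical sketch: score spatio-temporal action proposals by the
        # weighted fraction of pseudo-annotation points (one per frame, e.g.
        # from a person detector or a motion cue) that fall inside their
        # per-frame boxes, then keep the best-scoring proposal.
        from typing import Dict, List, Tuple

        Box = Tuple[float, float, float, float]   # (x_min, y_min, x_max, y_max)
        Point = Tuple[float, float]               # (x, y)

        def point_in_box(point: Point, box: Box) -> bool:
            x, y = point
            x0, y0, x1, y1 = box
            return x0 <= x <= x1 and y0 <= y <= y1

        def proposal_score(proposal: List[Box],
                           cues: Dict[str, List[Point]],
                           weights: Dict[str, float]) -> float:
            """Weighted fraction of cue points covered by the proposal's boxes."""
            score = 0.0
            for name, points in cues.items():
                hits = sum(point_in_box(p, b) for p, b in zip(points, proposal))
                score += weights.get(name, 1.0) * hits / max(len(points), 1)
            return score

        def select_proposal(proposals: List[List[Box]],
                            cues: Dict[str, List[Point]],
                            weights: Dict[str, float]) -> List[Box]:
            """Pick the proposal that best matches the combined pseudo-annotations."""
            return max(proposals, key=lambda prop: proposal_score(prop, cues, weights))

        if __name__ == "__main__":
            # Two toy proposals over three frames and a single "motion" cue.
            proposals = [
                [(0, 0, 10, 10)] * 3,
                [(20, 20, 30, 30)] * 3,
            ]
            cues = {"motion": [(5.0, 5.0), (6.0, 4.0), (25.0, 25.0)]}
            print(select_proposal(proposals, cues, weights={"motion": 1.0}))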